We propose a new, more effective use of persistent homology (PH), a method for comparing the topology of two datasets, to train deep networks to delineate road networks in aerial images and neuronal processes in microscopy scans. Its essence is a novel filtration function derived from fusing two existing techniques: thresholding-based filtration, previously used to train deep networks to segment medical images, and filtration with height functions, previously used to compare 2D and 3D shapes. We show experimentally that deep networks trained with our persistent-homology-based loss produce reconstructions of road networks and neuronal processes that preserve the connectivity of the originals better than existing topological and non-topological loss functions.
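As a rough illustration of the fused filtration idea (a sketch under our own assumptions, not the authors' exact formulation), the snippet below builds a per-pixel filtration value that mixes a threshold-style term driven by the predicted foreground probability with a height-function term; the mixing weight `alpha` and the choice of the vertical image axis as the height direction are assumptions.

```python
import numpy as np

def fused_filtration(prob_map: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Combine a threshold-based filtration (driven by predicted probability)
    with a height-function filtration (driven by a pixel's vertical position).

    prob_map : (H, W) array of predicted foreground probabilities in [0, 1].
    alpha    : assumed mixing weight between the two terms.
    """
    h, w = prob_map.shape
    # Threshold-style term: high-probability pixels enter the filtration early.
    threshold_term = 1.0 - prob_map
    # Height-function term: sweep the image from top to bottom.
    height_term = np.linspace(0.0, 1.0, h)[:, None] * np.ones((1, w))
    return alpha * threshold_term + (1.0 - alpha) * height_term

# The persistence of this fused filtration could then be computed with a
# cubical-complex backend (e.g. gudhi.CubicalComplex) for both the prediction
# and the ground truth, and compared to form a topological loss.
pred = np.random.rand(64, 64)
f = fused_filtration(pred, alpha=0.7)
print(f.shape, float(f.min()), float(f.max()))
```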
Research connecting text and images has recently seen several breakthroughs, with models like CLIP, DALL-E 2, and Stable Diffusion. However, the connection between text and other visual modalities, such as lidar data, has received less attention, hindered by the lack of text-lidar datasets. In this work, we propose LidarCLIP, a mapping from automotive point clouds to a pre-existing CLIP embedding space. Using image-lidar pairs, we supervise a point cloud encoder with the image CLIP embeddings, effectively relating text and lidar data with the image domain as an intermediary. We show the effectiveness of LidarCLIP by demonstrating that lidar-based retrieval is generally on par with image-based retrieval, but with complementary strengths and weaknesses. By combining image and lidar features, we improve upon both single-modality methods and enable a targeted search for challenging detection scenarios under adverse sensor conditions. We also use LidarCLIP as a tool to investigate fundamental lidar capabilities through natural language. Finally, we leverage our compatibility with CLIP to explore a range of applications, such as point cloud captioning and lidar-to-image generation, without any additional training. We hope LidarCLIP can inspire future work to dive deeper into connections between text and point cloud understanding. Code and trained models available at https://github.com/atonderski/lidarclip.
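A minimal sketch of the supervision scheme described above, in which frozen CLIP image embeddings act as targets for a point cloud encoder; the `PointCloudEncoder`, the cosine-distance objective, and the toy data are placeholders rather than the released LidarCLIP code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointCloudEncoder(nn.Module):
    """Hypothetical stand-in for a lidar encoder producing CLIP-sized embeddings."""
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, embed_dim))

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 4) with x, y, z, intensity; max-pool over the point dimension.
        return self.mlp(points).max(dim=1).values

def distillation_step(lidar_encoder, clip_image_embed, points, optimizer):
    """One step: pull the lidar embedding toward the paired image's frozen CLIP embedding."""
    lidar_embed = F.normalize(lidar_encoder(points), dim=-1)
    target = F.normalize(clip_image_embed, dim=-1)            # precomputed, frozen
    loss = 1.0 - (lidar_embed * target).sum(dim=-1).mean()    # cosine distance
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

encoder = PointCloudEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
points = torch.randn(8, 1024, 4)     # toy batch of lidar point clouds
image_emb = torch.randn(8, 512)      # would come from a frozen CLIP image encoder
print(distillation_step(encoder, image_emb, points, opt))
```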
We study critical systems that allocate scarce resources to satisfy basic needs, such as homeless services that provide housing. These systems often support communities disproportionately affected by systemic racial, gender, or other injustices, so it is crucial to design these systems with fairness considerations in mind. To address this problem, we propose a framework for evaluating fairness in contextual resource allocation systems that is inspired by fairness metrics in machine learning. This framework can be applied to evaluate the fairness properties of a historical policy, as well as to impose constraints in the design of new (counterfactual) allocation policies. Our work culminates with a set of incompatibility results that investigate the interplay between the different fairness metrics we propose. Notably, we demonstrate that: 1) fairness in allocation and fairness in outcomes are usually incompatible; 2) policies that prioritize based on a vulnerability score will usually result in unequal outcomes across groups, even if the score is perfectly calibrated; 3) policies using contextual information beyond what is needed to characterize baseline risk and treatment effects can be fairer in their outcomes than those using just baseline risk and treatment effects; and 4) policies using group status in addition to baseline risk and treatment effects are as fair as possible given all available information. Our framework can help guide the discussion among stakeholders in deciding which fairness metrics to impose when allocating scarce resources.
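As a toy illustration of the difference between fairness in allocation and fairness in outcomes (the metric definitions below are simplified assumptions, not the paper's formal framework), one can contrast per-group allocation rates with per-group adverse-outcome rates in a synthetic policy log:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)            # two demographic groups
alloc_rate = np.full(n, 0.30)                 # the same allocation rate for both groups
allocated = rng.random(n) < alloc_rate
# Outcomes can still differ across groups, e.g. due to different baseline risk.
bad_outcome = rng.random(n) < (np.where(allocated, 0.10, 0.25) + 0.05 * group)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: allocation rate = {allocated[mask].mean():.3f}, "
          f"adverse outcome rate = {bad_outcome[mask].mean():.3f}")
```

Even with identical allocation rates, the adverse-outcome rates diverge, which is the kind of tension the incompatibility results above formalize.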
Few-shot learning is a rapidly evolving area of research in machine learning where the goal is to classify unlabeled data with only one or "a few" labeled exemplary samples. Neural networks are typically trained to minimize a distance metric between labeled exemplary samples and a query set. Early few-shot approaches use an episodic training process to sub-sample the training data into few-shot batches. This training process matches the sub-sampling done on evaluation. Recently, conventional supervised training coupled with a cosine distance has achieved superior performance for few-shot learning. Despite the diversity of few-shot approaches over the past decade, most methods still rely on the cosine or Euclidean distance layer between the latent features of the trained network. In this work, we investigate the distributions of trained few-shot features and demonstrate that they can be roughly approximated as exponential distributions. Under this assumption of an exponential distribution, we propose a new maximum log-likelihood metric for few-shot architectures. We demonstrate that the proposed metric achieves superior accuracy compared with conventional similarity metrics (e.g., cosine and Euclidean distance) and achieves state-of-the-art inductive few-shot performance. Further, additional gains can be achieved by carefully combining multiple metrics, and neither of our methods requires post-processing feature transformations, which are common to many algorithms. Finally, we demonstrate a novel iterative algorithm designed around our maximum log-likelihood approach that achieves state-of-the-art transductive few-shot performance when the evaluation data is imbalanced. We have made our code publicly available at https://github.com/samuelhess/MLL_FSL/.
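A minimal sketch of a maximum log-likelihood classification rule under a per-class, per-dimension exponential model of non-negative (e.g. post-ReLU) features; the rate estimation and scoring below reflect our reading of the idea and are not the released implementation.

```python
import numpy as np

def exponential_mll_scores(support: dict, queries: np.ndarray, eps: float = 1e-6) -> dict:
    """support: {class_label: (n_c, d) array of non-negative support features}
    queries: (m, d) array of non-negative query features.
    Returns {class_label: (m,) log-likelihood scores}; classify by argmax over classes."""
    scores = {}
    for label, feats in support.items():
        rate = 1.0 / (feats.mean(axis=0) + eps)            # MLE of the exponential rate per dim
        # log p(x | rate) summed over dimensions: sum_j (log rate_j - rate_j * x_j)
        scores[label] = (np.log(rate) - rate * queries).sum(axis=1)
    return scores

rng = np.random.default_rng(0)
support = {c: rng.exponential(scale=c + 1.0, size=(5, 64)) for c in range(3)}
queries = rng.exponential(scale=2.0, size=(10, 64))
scores = exponential_mll_scores(support, queries)
pred = np.argmax(np.stack([scores[c] for c in range(3)]), axis=0)
print(pred)
```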
It is well known that the parameterized family of functions representable by fully connected feedforward neural networks with ReLU activation functions is exactly the class of piecewise linear functions with finitely many pieces. It is less well known that, for every fixed architecture of ReLU neural network, the parameter space admits positive-dimensional spaces of symmetries, and hence the local functional dimension near any given parameter is lower than the parametric dimension. In this work we carefully define the notion of functional dimension, show that it is inhomogeneous across the parameter space of ReLU neural network functions, and continue the investigation begun in [14] and [5] into when the functional dimension achieves its theoretical maximum. We also study the quotient space and the fibers of the realization map from parameter space to function space, providing examples of fibers that are disconnected, fibers on which the functional dimension is non-constant, and fibers on which the symmetry group acts non-transitively.
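One simple numerical proxy for the local functional dimension discussed above (an illustration under our own assumptions, not the paper's formal definition) is the rank of the matrix of parameter gradients of the network output evaluated at a batch of sample inputs:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1))
params = list(net.parameters())
n_params = sum(p.numel() for p in params)

# One row per sample input: the gradient of the scalar output w.r.t. all parameters.
rows = []
for x in torch.randn(256, 2):
    y = net(x).sum()                       # scalar output for this input
    grads = torch.autograd.grad(y, params)
    rows.append(torch.cat([g.flatten() for g in grads]))
jacobian = torch.stack(rows)

rank = torch.linalg.matrix_rank(jacobian, atol=1e-6).item()
print(f"parametric dimension = {n_params}, estimated local functional dimension = {rank}")
```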
Physics-constrained machine learning is becoming an important topic in the field of machine learning for physics. One of the most significant advantages of incorporating physics constraints into machine learning methods is that the resulting models require considerably less training data. By building physical rules into the machine learning formulation itself, the predictions are expected to be physically plausible. The Gaussian process (GP) is perhaps one of the most common machine learning methods for small datasets. In this paper, we investigate the possibility of constraining a GP formulation to be monotonic on three different materials datasets, one experimental and two computational. The monotonic GP is compared against the regular GP, and a significant reduction in posterior variance is observed. The monotonic GP is strictly monotonic in the interpolation regime, but in the extrapolation regime the monotonicity effect starts to fade as one moves beyond the training data. Imposing monotonicity on the GP comes at a small cost in accuracy compared to the regular GP. The monotonic GP is likely most useful in applications where data are scarce and noisy and where monotonicity is supported by strong physical evidence.
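The sketch below is a crude rejection-sampling illustration of constraining a GP posterior to monotone functions, not the constrained GP formulation studied in the paper; it fits an ordinary GP with scikit-learn, keeps only posterior draws that are non-decreasing on a grid, and compares the posterior spread before and after.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 8))[:, None]
y_train = np.tanh(3 * x_train[:, 0]) + 0.05 * rng.standard_normal(8)   # monotone ground truth

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3).fit(x_train, y_train)

x_grid = np.linspace(0, 1, 25)[:, None]
samples = gp.sample_y(x_grid, n_samples=2000, random_state=0)   # shape (25, 2000)

# Keep only posterior draws that are (numerically) non-decreasing on the grid.
monotone = samples[:, np.all(np.diff(samples, axis=0) >= -1e-2, axis=0)]

print(f"accepted {monotone.shape[1]} / {samples.shape[1]} monotone draws")
if monotone.shape[1] == 0:
    print("no monotone draws accepted; loosen the tolerance or draw more samples")
else:
    print(f"mean posterior std, unconstrained: {samples.std(axis=1).mean():.4f}")
    print(f"mean posterior std, monotone-only: {monotone.std(axis=1).mean():.4f}")
```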
Although recent work has focused on quantifying word usage to find the overall shape of narrative emotional arcs, certain features of narratives within narratives remain to be explored. Here, we characterize the narrative time scale of sub-narratives by finding the length of text at which fluctuations in word usage begin to be correlated. We represent more than 30,000 Project Gutenberg books as time series using ousiometrics, a power-danger framework for essential meaning that is itself a reinterpretation of the valence-arousal-dominance framework derived from semantic differentials. We decompose each book's power and danger time series, using empirical mode decomposition, into a sum of constituent oscillatory modes and a non-oscillatory trend. By comparing the decomposition of the original power and danger time series with those derived from shuffled text, we find that shorter books exhibit only a general trend, while longer books have fluctuations in addition to the general trend, much as subplots have arcs within an overall narrative arc. These fluctuations typically have periods of a few thousand words regardless of book length or library classification code, but vary with the content and structure of the book. Our method provides a data-driven denoising approach that works for texts of various lengths, in contrast to more traditional approaches that use large window sizes and may inadvertently smooth out relevant information, especially for shorter texts.
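A minimal sketch of the decompose-and-compare step, assuming the PyEMD (EMD-signal) package and a synthetic stand-in for a book-level power time series; the series construction and the shuffling are simplified assumptions.

```python
import numpy as np
from PyEMD import EMD   # assumes the PyEMD / EMD-signal package is installed

rng = np.random.default_rng(0)

# Toy stand-in for a book's "power" time series: a slow trend plus a
# sub-narrative oscillation with a period of a few thousand words, plus noise.
t = np.arange(10_000)
series = 1e-5 * t + 0.3 * np.sin(2 * np.pi * t / 2000) + 0.2 * rng.standard_normal(t.size)
shuffled = rng.permutation(series)   # stands in for the series of a word-shuffled text

emd = EMD()
imfs = emd.emd(series)               # intrinsic mode functions plus a residual trend
imfs_shuffled = emd.emd(shuffled)

print(f"original text: {len(imfs)} modes, shuffled text: {len(imfs_shuffled)} modes")
```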
Based on data collected by echo-sounder buoys attached to drifting fish aggregating devices (DFADs) in tropical oceans, this study applies machine learning protocols to examine temporal trends in the association of tuna schools with drifting objects. Using the binary output, metrics commonly used in the literature were adapted to account for the fact that the entire tuna aggregation under the DFAD is considered. The median time it took tuna to first colonize the DFADs ranged between 25 and 43 days depending on the ocean, with the longest soak and colonization times registered in the Pacific Ocean. The continuous residence times of tuna schools were generally shorter than the continuous absence times (medians between 5 and 7 days and between 9 and 11 days, respectively), in line with previous findings. Using the regression output, two novel metrics, aggregation time and disaggregation time, were estimated to further our understanding of the symmetry of the aggregation process. In all oceans, the time it takes for a tuna aggregation to leave a DFAD was not substantially longer than the time it takes for the aggregation to form. The value of these results is discussed in the context of the 'ecological trap' hypothesis, and further analyses to enrich and exploit this data source are proposed.
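A small, generic run-length helper showing how continuous residence and absence times could be computed from a daily binary presence series; this is an illustration, not the study's actual processing pipeline.

```python
import numpy as np

def run_lengths(presence: np.ndarray):
    """Given a binary daily series (1 = tuna aggregation present at the DFAD), return
    the lists of continuous residence times and continuous absence times, in days."""
    residence, absence = [], []
    change = np.flatnonzero(np.diff(presence)) + 1          # boundaries of constant runs
    for start, stop in zip(np.r_[0, change], np.r_[change, presence.size]):
        (residence if presence[start] else absence).append(stop - start)
    return residence, absence

presence = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0])
res, ab = run_lengths(presence)
print(f"median residence: {np.median(res)} days, median absence: {np.median(ab)} days")
```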
In many scientific disciplines we are interested in inferring the nonlinear dynamical system underlying a set of observed time series, a challenging task in the face of chaotic behavior and noise. Previous deep learning approaches toward this goal often lacked interpretability and tractability. In particular, the high-dimensional latent spaces often required for faithful embeddings hamper theoretical analysis, even when the underlying dynamics lives on a lower-dimensional manifold. Motivated by the emerging principles of dendritic computation, we augment a dynamically interpretable and mathematically tractable piecewise-linear (PL) recurrent neural network (RNN) with a linear spline basis expansion. We show that this approach retains all the theoretically appealing properties of the simple PLRNN, yet boosts its capacity for approximating arbitrary nonlinear dynamical systems in comparatively low dimensions. We employ two frameworks for training the system: one combining backpropagation through time (BPTT) with teacher forcing, and another based on fast and scalable variational inference. We show that the dendritically expanded PLRNN achieves better reconstructions with fewer parameters and dimensions on various dynamical systems benchmarks and compares favorably to other methods, while retaining a tractable and interpretable structure.
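A minimal PyTorch sketch of a piecewise-linear latent update augmented with a ReLU spline basis expansion, reflecting our reading of the dendritic expansion; the latent dimension, the number of basis functions, and the threshold placement are assumptions.

```python
import torch
import torch.nn as nn

class DendriticPLRNN(nn.Module):
    """Latent update z_{t+1} = A z_t + W * sum_b alpha_b relu(z_t - h_b) + bias."""
    def __init__(self, dim: int = 16, n_bases: int = 5):
        super().__init__()
        self.A = nn.Parameter(0.9 * torch.eye(dim))                       # linear part
        self.W = nn.Parameter(0.1 * torch.randn(dim, dim))                # nonlinear coupling
        self.alpha = nn.Parameter(torch.randn(n_bases, dim) / n_bases)    # basis weights
        self.h = nn.Parameter(torch.linspace(-1, 1, n_bases)[:, None].repeat(1, dim))  # thresholds
        self.bias = nn.Parameter(torch.zeros(dim))

    def basis_expansion(self, z: torch.Tensor) -> torch.Tensor:
        # (B, dim) -> (B, dim): linear spline of ReLU "dendritic branches" per unit.
        return (self.alpha * torch.relu(z[:, None, :] - self.h)).sum(dim=1)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return z @ self.A.T + self.basis_expansion(z) @ self.W.T + self.bias

model = DendriticPLRNN()
trajectory = [torch.randn(4, 16)]
for _ in range(100):                     # roll the latent dynamics forward
    trajectory.append(model(trajectory[-1]))
print(trajectory[-1].shape)
```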
Masked autoencoding has become a successful pre-training paradigm for Transformer models on text, images, and, recently, point clouds. Raw automotive datasets are suitable candidates for self-supervised pre-training, as they are generally cheap to collect compared to annotations for tasks such as 3D object detection (OD). However, the development of masked autoencoders for point clouds has so far focused only on synthetic and indoor data. Consequently, existing methods have tailored their representations and models to point clouds that are small, dense, and have homogeneous point density. In this work, we study masked autoencoding in an automotive setting, where point clouds are sparse and the point density can vary drastically among objects in the same scene. To this end, we propose Voxel-MAE, a simple masked autoencoding pre-training scheme designed for voxel representations. We pre-train the backbone of a Transformer-based 3D object detector to reconstruct masked voxels and to distinguish between empty and non-empty voxels. Our method improves 3D OD performance by 1.75 mAP points and 1.05 NDS on the nuScenes dataset. Compared to existing self-supervised methods on automotive data, Voxel-MAE shows up to a $2\times$ performance increase. Furthermore, we show that by pre-training with Voxel-MAE we need only 40% of the annotated data to outperform a randomly initialized equivalent. Code will be released.
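A rough sketch of the two pre-training objectives described above (reconstructing masked voxels and separating empty from non-empty voxels); the voxel featurization, the toy encoder, and the loss weighting are placeholder assumptions, not the released Voxel-MAE code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mask_voxels(n_voxels: int, mask_ratio: float = 0.7):
    """Split the non-empty voxels of one lidar sweep into visible and masked index sets."""
    perm = torch.randperm(n_voxels)
    n_masked = int(mask_ratio * n_voxels)
    return perm[n_masked:], perm[:n_masked]          # visible_idx, masked_idx

# Placeholder modules; a real backbone would be a (sparse) Transformer over voxels.
feat_dim = 32
encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 64))
recon_head = nn.Linear(64, feat_dim)    # reconstruct features of masked voxels
occ_head = nn.Linear(64, 1)             # classify voxels as empty vs. non-empty

voxels = torch.randn(500, feat_dim)                 # toy non-empty voxel features
visible_idx, masked_idx = mask_voxels(voxels.shape[0])

# Encode only the visible voxels; use their mean as a crude context vector.
context = encoder(voxels[visible_idx]).mean(dim=0)

# Objective 1: reconstruct the masked voxels from the context (MSE).
recon = recon_head(context).expand(masked_idx.numel(), feat_dim)
recon_loss = F.mse_loss(recon, voxels[masked_idx])

# Objective 2: distinguish non-empty (masked) voxels from sampled empty ones (BCE).
empty_feats = torch.zeros(masked_idx.numel(), feat_dim)
logits = occ_head(encoder(torch.cat([voxels[masked_idx], empty_feats]))).squeeze(-1)
labels = torch.cat([torch.ones(masked_idx.numel()), torch.zeros(masked_idx.numel())])
occ_loss = F.binary_cross_entropy_with_logits(logits, labels)

print((recon_loss + occ_loss).item())
```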